Huge Covert Teams Inside Google Engage in Manual Interventions on Google Search Results To Rig Elections And Stock Markets

- The "Google Algorithms" are manually created software instructions that allow Google's global server farms to monopolistically control news, information and perceptions around the world for the benefit of Google's elitist globalist bosses

- The Google empire controls most of the media on Earth, via many front corporations, and indoctrinates everyone in its organization using 'cult' methodologies. Google's owners believe in "our-ideology-at-any-cost" and "the-ends-justify-the-means" scenarios. What could possibly go wrong?

- EYE-WITNESS GOOGLE STAFF AND PARTNER RECORDINGS AND TESTIMONY PROVE THAT GOOGLE IS A CRIMINAL INFORMATION MANIPULATION, STOCK MARKET-RIGGING, TAX-EVASION MONOPOLY THAT BRIBES CONGRESS

- ERIC SCHMIDT, DAVID DRUMMOND, JARED COHEN, SERGEY BRIN AND LARRY PAGE AT GOOGLE HAVE THIS THEORY THAT "STARTING CIVIL WARS IS GOOD FOR A SOCIETY..." SO THEY USE GOOGLE TO CREATE CULTURAL SPLITS. OTHERS MIGHT CALL THAT "TREASON". THEY CONSPIRE TOGETHER TO MASS MANIPULATE INFORMATION AND DELETE ALL COUNTER-POINT VIEWS FROM THE WEB. THEY LAUGH AT ANY ATTEMPT TO STOP THEM BECAUSE THEY BRIBE 90% OF THE GOVERNMENT.

- LARRY PAGE AND ELON MUSK SHARE SEX PARTNERS, LIVING ARRANGEMENTS, MONIES, DARK-MONEY CAMPAIGN FINANCING, STOCK OWNERSHIPS, AND CENSORSHIP PROGRAMS THAT KEEP EACH OTHER'S CRIMES FROM GETTING COVERED IN THE NEWS.

- GOOGLE BOSSES, INCLUDING ERIC SCHMIDT, TOLD ASSOCIATES: "OBAMA NEVER WOULD HAVE BEEN ELECTED WITHOUT GOOGLE'S DIGITAL MASS PERCEPTION-MANIPULATION AND OPINION-STEERING TECHNOLOGIES..." SEE MORE AT: https://www.thecreepyline.com

-----------------------------


Forensic Proof That Google Is A Cult:

Google was created to become the best of the best at mind control for social and political manipulation.

Steven Hassan, renowned cult-interdiction specialist and author of "Combating Cult Mind Control," says:
"...there are universal patterns of manipulation; someone who's skilled (i.e., Google) can figure out how to systematically and incrementally manipulate you into a vulnerable isolated place (like your computer screen) and start to control your information, control your behavior, control your thinking...to make you dependent and obedient. There are millions of people in mind control cults like this..."

The biggest lie ever told is the one that you tell yourself when you say that "subliminal messages and digital mind control have no effect on you". They do! The more you deny it, the better it works on you.

The young employees of Google are chosen for their naive and impressionable characteristics and then, as with Facebook, immersed in a synthetic bubble of ideological echo-chambering in order to push the precepts of the "Google Youth".

---------------------------------------


Google wants to mass-"Police" all political "tone" and ideology on the internet.

By Adrian Dennis

Google has “huge teams” working on manual interventions in search results, an apparent contradiction of sworn testimony made to Congress by CEO Sundar Pichai, according to an internal post leaked to Breitbart News.

“There are subjects that are prone to hyperbolic content, misleading information, and offensive content,” said Daniel Aaronson, a member of Google’s Trust & Safety team.

“Now, these words are highly subjective and no one denies that. But we can all agree generally, lines exist in many cultures about what is clearly okay vs. what is not okay.”

“In extreme cases where we need to act quickly on something that is so obviously not okay, the reactive/manual approach is sometimes necessary.”

The comments came to light in a leaked internal discussion thread, started by a Google employee who noticed that the company had recently changed search results for “abortion” on its YouTube video platform, a change which caused pro-life videos to largely disappear from the top ten results.

In addition to the “manual approach,” Aaronson explained that Google also trained automated “classifiers” – algorithms or “scalable solutions” that correct “problems” in search results.

Aaronson listed three areas where either manual interventions or classifier changes might take place: organic search (“The bar for changing classifiers or manual actions on spam in organic search is extremely high”), YouTube, and Google Home/Google Assistant.
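
To make the distinction concrete, here is a minimal sketch (in Python, and purely illustrative; nothing below is Google's actual code) of how a query-level "classifier" could route sensitive searches toward special handling. The term list and labels are invented for the example:

    # Hypothetical sketch of a query "classifier": flag queries matching a
    # curated term list so they can be routed to special handling.
    # The term list, labels, and logic are assumptions, not from the leak.
    SENSITIVE_TERMS = {"abortion", "shooting", "terrorist"}

    def classify_query(query):
        """Return a routing label for a search query."""
        tokens = set(query.lower().split())
        if tokens & SENSITIVE_TERMS:
            return "needs_review"   # candidate for manual action or curated ranking
        return "default"            # served by the ordinary ranking pipeline

    print(classify_query("abortion facts"))  # -> needs_review
    print(classify_query("banana bread"))    # -> default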

Aaronson’s post also reveals that there is very little transparency around decisions to adjust classifiers or manually correct controversial search results, even internally. Aaronson compared Google’s decision-making process in this regard to a closely-guarded “Pepsi Formula.”

These comments, part of a longer post copied below, seem to contradict Google CEO Sundar Pichai’s sworn congressional testimony that his company does not “manually intervene on any particular search result.”

According to an internal discussion thread leaked to Breitbart News by a source within the company, a Google employee took issue with Pichai’s remarks, stating that it “seems like we are pretty eager to cater our search results to the social and political agenda of left-wing journalists.”

The posts leaked by the source reveal that YouTube, a Google subsidiary, manually intervened on search results related to “abortion” and “abortions.” The intervention caused pro-life videos to disappear from the top ten search results for those terms, where they had previously been featured prominently. The posts also show YouTube intervened on search results related to progressive activist David Hogg and Democrat politician Maxine Waters.

In a comment to Breitbart News, a Google spokeswoman also insisted that “Google has never manipulated or modified the search results or content in any of its products to promote a particular political ideology.”

Pichai might claim that he was just talking about Google, not YouTube, which was the focus of the leaked discussion thread. But Aaronson’s post extends to Google’s other products: organic search, Google Home, and Google Assistant.

Aaronson is also clear that the manipulation of search results that are “prone to abuse/controversial content” is not a small affair, but is the responsibility of “huge teams” within Google.

“These lines are very difficult and can be very blurry, we are all well aware of this. So we’ve got huge teams that stay cognizant of these facts when we’re crafting policies, considering classifier changes, or reacting with manual actions.”

If Google has “huge teams” that sometimes manually intervene on search results, it’s scarcely plausible to argue that Pichai might not know about them.

THE SMOKING GUN: Google Manipulated YouTube Search Results for Abortion, Maxine Waters, David Hogg In Order To Steer Politics And Stock Gains To Palo Alto Mafia and Pelosi/Feinstein Families


In sworn testimony, Google CEO Sundar Pichai told Congress last month that his company does not “manually intervene” on any particular search result. Yet an internal discussion thread leaked to Breitbart News reveals Google regularly intervenes in search results on its YouTube video platform – including a recent intervention that pushed pro-life videos out of the top ten search results for “abortion.”

The term “abortion” was added to a “blacklist” file for “controversial YouTube queries,” which contains a list of search terms that the company considers sensitive. According to the leak, these include search terms related to abortion, abortions, the Irish abortion referendum, Democratic Congresswoman Maxine Waters, and anti-gun activist David Hogg.

The existence of the blacklist was revealed in an internal Google discussion thread leaked to Breitbart News by a source inside the company who wishes to remain anonymous. A partial list of blacklisted terms was also leaked to Breitbart by another Google source.

In the leaked discussion thread, a Google site reliability engineer hinted at the existence of more search blacklists, according to the source.

“We have tons of white- and blacklists that humans manually curate,” said the employee. “Hopefully this isn’t surprising or particularly controversial.”

Others were more concerned about the presence of the blacklist. According to the source, the software engineer who started the discussion called the manipulation of search results related to abortion a “smoking gun.”

The software engineer noted that the change had occurred following an inquiry from a left-wing Slate journalist about the prominence of pro-life videos on YouTube, and that pro-life videos were replaced with pro-abortion videos in the top ten results for the search terms following Google’s manual intervention.

“The Slate writer said she had complained last Friday and then saw different search results before YouTube responded to her on Monday,” wrote the employee. “And lo and behold, the [changelog] was submitted on Friday, December 14 at 3:17 PM.”

The manually downranked items included several videos from Dr. Antony Levatino, a former abortion doctor who is now a pro-life activist. Another video in the top ten featured a woman’s personal story of being pressured to have an abortion, while another featured pro-life conservative Ben Shapiro. The Slate journalist who complained to Google reported that these videos previously featured in the top ten, describing them in her story as “dangerous misinformation.”

Since the Slate journalist’s inquiry and Google’s subsequent intervention, the top search results now feature pro-abortion content from left-wing sources like BuzzFeed, Vice, CNN, and Last Week Tonight With John Oliver. In her report, the Slate journalist acknowledged that the search results changed shortly after she contacted Google.

The manual adjustment of search results by a Google-owned platform contradicts a key claim made under oath by Google CEO Sundar Pichai in his congressional testimony earlier this month: that his company does not “manually intervene on any search result.”

A Google employee in the discussion thread drew attention to Pichai’s claim, noting that it “seems like we are pretty eager to cater our search results to the social and political agenda of left-wing journalists.”

One of the posts in the discussion also noted that the blacklist had previously been edited to include the search term “Maxine Waters” after a single Google employee complained the top YouTube search result for Maxine Waters was “very low quality.”

Google’s alleged intervention on behalf of a Democratic congresswoman would be further evidence of the tech giant using its resources to prop up the left. Breitbart News previously reported on leaked emails revealing the company targeted pro-Democrat demographics in its get-out-the-vote efforts in 2016.

According to the source, a software engineer in the thread also noted that “a bunch of terms related to the abortion referendum in Ireland” had been added to the blacklist – another change with potentially dramatic consequences on the national policies of a western democracy.

At least one post in the discussion thread revealed the existence of a file called “youtube_controversial_query_blacklist,” which contains a list of YouTube search terms that Google manually curates. In addition to the terms “abortion,” “abortions,” “Maxine Waters,” and search terms related to the Irish abortion referendum, a Google software engineer noted that the blacklist includes search terms related to terrorist attacks (the posts specifically mention the “Strasbourg terrorist attack” as being on the list).

“If you look at the other entries recently added to the youtube_controversial_query_blacklist (e.g., entries related to the Strasbourg terrorist attack), the addition of abortion seems…out-of-place,” wrote the software engineer, according to the source.

After learning of the existence of the blacklist, Breitbart News obtained a partial screenshot of the full blacklist file from a source within Google. It reveals that the blacklist includes search terms related to both mass shootings and the progressive anti-second amendment activist David Hogg.

This suggests Google has followed the lead of Democrat politicians, who have repeatedly pushed tech companies to censor content related to the Parkland school shooting and the Parkland anti-gun activists. It’s part of a popular new line of thought in the political-media establishment, which views the public as too stupid to question conspiracy theories for themselves.


The full internal filepath of the blacklist, according to another source, is:

//depot/google3/googledata/superroot/youtube/youtube_controversial_query_blacklist
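
The leak establishes only that the file exists and is manually curated; how it is consumed is not shown. A plausible sketch, with an invented file format and invented function names, would be an exact-match lookup that diverts listed queries away from ordinary ranking:

    # Hypothetical: load a one-query-per-line blacklist and use it to gate
    # which ranking path a search takes. Everything here is an assumption.
    def load_blacklist(path):
        with open(path, encoding="utf-8") as f:
            return {line.strip().lower() for line in f if line.strip()}

    def route_query(query, blacklist):
        if query.strip().lower() in blacklist:
            return "curated_results"   # manually adjusted result set
        return "organic_results"       # normal algorithmic ranking

    # Example (file name is hypothetical):
    # bl = load_blacklist("youtube_controversial_query_blacklist.txt")
    # route_query("abortion", bl)   # -> "curated_results" if the term is listed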

Contradictions

Responding to a request for comment, a YouTube spokeswoman said the company wants to promote “authoritative” sources in its search results, but maintained that YouTube is a “platform for free speech” that “allow[s]” both pro-life and pro-abortion content.

YouTube’s full comment:

    YouTube is a platform for free speech where anyone can choose to post videos, as long as they follow our Community Guidelines, which prohibit things like inciting violence and pornography. We apply these policies impartially and we allow both pro-life and pro-choice opinions. Over the last year we’ve described how we are working to better surface news sources across our site for news-related searches and topical information. We’ve improved our search and discovery algorithms, built new features that clearly label and prominently surface news sources on our homepage and search pages, and introduced information panels to help give users more authoritative sources where they can fact check information for themselves.

The fact that Google manually curates politically contentious search results fits in with a wider pattern of political activity on the part of the tech giant.

In 2018, Breitbart News exclusively published a leaked video from the company that showed senior management in dismay at Trump’s election victory, and pledging to use the company’s power to make his populist movement a “hiccup” in history.

Breitbart also leaked “The Good Censor,” an internal research document from Google that admits the tech giant is engaged in the censorship of its own products, partly in response to political events.

Another leak revealed that employees within the company, including Google’s current director of Trust and Safety, tried to kick Breitbart News off Google’s market-dominating online ad platforms.

Yet another showed Google engaged in targeted turnout operations aimed at boosting voter participation in pro-Democrat demographics in “key states” ahead of the 2016 election. The effort was dubbed a “silent donation” by a top Google employee.

Evidence for Google’s partisan activities is now overwhelming. President Trump has previously warned Google, as well as other Silicon Valley giants, not to engage in censorship or partisan activities. Google continues to defy him.

-----------------------------------------

HOW GOOGLE RIGS ELECTIONS AND CHARACTER ASSASSINATION ATTACKS AROUND THE GLOBE FOR GOOGLE VC'S POLITICAL IDEOLOGIES AND VENDETTAS

BY ROBERT EPSTEIN

Authorities in the UK have finally figured out that fake news stories and Russian-placed ads are not the real problem. The UK Parliament is about to impose stiff penalties—not on the people who place the ads or write the stories, but on the Big Tech platforms that determine which ads and stories people actually see.

Parliament’s plans will almost surely be energized by the latest leak of damning material from inside Google’s fortress of secrecy: The Wall Street Journal recently reported on emails exchanged among Google employees in January 2017 in which they strategized about how to alter Google search results and other “ephemeral experiences” to counter President Donald Trump’s newly imposed travel ban. The company claims that none of these plans was ever implemented, but who knows?

While U.S. authorities have merely held hearings, EU authorities have taken dramatic steps in recent years to limit the powers of Big Tech, most recently with a comprehensive law that protects user privacy—the General Data Protection Regulation—and a whopping $5.1 billion fine against Google for monopolistic practices in the mobile device market. Last year, the European Union also levied a $2.7 billion fine against Google for filtering and ordering search results in a way that favored their own products and services. That filtering and ordering, it turns out, is of crucial importance.

As years of research I’ve been conducting on online influence have shown, content per se is not the real threat these days; what really matters is (a) which content is selected for users to see, and (b) the way that content is ordered in search results, search suggestions, newsfeeds, message feeds, comment lists, and so on. That’s where the power lies to shift opinions, purchases, and votes, and that power is held by a disturbingly small group of people.
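
A toy model shows how much ordering alone matters. Assume a plausible click-through rate for each of the ten first-page positions (the per-rank numbers below are invented; real position-bias curves are even steeper at the top). The same ten results, ordered two ways, split attention very differently:

    # Illustrative position bias: the same ten results, ordered two ways.
    CTR_BY_RANK = [0.30, 0.15, 0.10, 0.07, 0.05, 0.04, 0.03, 0.02, 0.02, 0.02]

    def click_share(ordering, label):
        """Fraction of all first-page clicks going to items tagged `label`."""
        got = sum(ctr for item, ctr in zip(ordering, CTR_BY_RANK) if item == label)
        return got / sum(CTR_BY_RANK)

    favoring_a = ["A"] * 5 + ["B"] * 5   # items about A hold ranks 1-5
    favoring_b = ["B"] * 5 + ["A"] * 5   # same ten items, A demoted to ranks 6-10

    print(round(click_share(favoring_a, "A"), 2))  # 0.84
    print(round(click_share(favoring_b, "A"), 2))  # 0.16

Nothing was added or removed between the two runs; only the order changed, and one side's share of clicks went from 84 percent to 16 percent.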

I say “these days” because the explosive growth of a handful of massive platforms on the internet—the largest, by far, being Google and the next largest being Facebook—has changed everything. Millions of people and organizations are constantly trying to get their content in front of our eyes, but for more than 2.5 billion people around the world—soon to be more than 4 billion—what actually appears on screen is decided by the selection and ordering algorithms of those few platforms.

In randomized, controlled, peer-reviewed research I’ve conducted with thousands of people, I’ve shown repeatedly that when people are undecided, I can shift their opinions on just about any topic just by changing how I filter and order the information I show them. I’ve also shown that when, in multiple searches, I show people more and more information that favors one candidate, I can shift opinions even farther. Even more disturbing, I can do these things in ways that are completely invisible to people and in ways that don’t leave paper trails for authorities to trace.

Worse still, these new forms of influence often rely on ephemeral content—information that is generated on the fly by an algorithm and then disappears forever, which means that it would be difficult, if not impossible, for authorities to reconstruct. If, on Election Day this coming November, Mark Zuckerberg decides to broadcast go-out-and-vote reminders mainly to members of one political party, how would we be able to detect such a manipulation? If we can’t detect it, how would we be able to reduce its impact? And how, days or weeks later, would we be able to turn back the clock to see what happened?

Of course, companies like Google and Facebook emphatically reject the idea that their search and newsfeed algorithms are being tweaked in ways that could meddle in elections. Doing so would undermine the public’s trust in their companies, spokespeople have said. They insist that their algorithms are complicated, constantly changing, and subject to the “organic” activity of users.

This is, of course, sheer nonsense. Google can adjust its algorithms to favor any candidate it chooses no matter what the activity of users might be, just as easily as I do in my experiments. As legal scholar Frank Pasquale noted in his recent book “The Black Box Society,” blaming algorithms just doesn’t cut it; the responsibility for what an algorithm does should always lie with the people who wrote the algorithm and the companies that deployed the algorithm. Alan Murray, president of Fortune, recently framed the issue this way: “Rule one in the Age of AI: Humans remain accountable for decisions, even when made by machines.”

Given that 95 percent of donations from Silicon Valley generally go to Democrats, it’s hard to imagine that the algorithms of companies like Facebook and Google don’t favor their favorite candidates. A newly leaked video of a 2016 meeting at Google shows without doubt that high-ranking Google executives share a strong political preference, which could easily be expressed in algorithms. The favoritism might be deliberately programmed or occur simply because of unconscious bias. Either way, votes and opinions shift.

It’s also hard to imagine how, in any election in the world, with or without intention on the part of company employees, Google search results would fail to tilt toward one candidate. Google’s search algorithm certainly has no equal-time rule built into it; we wouldn’t want it to! We want it to tell us what’s best, and the algorithm will indeed always favor one dog food over another, one music service over another, and one political candidate over another. When the latter happens … votes and opinions shift.

Here are 10 ways—seven of which I am actively studying and quantifying—that Big Tech companies could use to shift millions of votes this coming November with no one the wiser. Let’s hope, of course, that these methods are not being used and will never be used, but let’s be realistic too; there’s generally no limit to what people will do when money and power are on the line.

1. Search Engine Manipulation Effect (SEME)
Ongoing research I began in January 2013 has shown repeatedly that when one candidate is favored over another in search results, voting preferences among undecided voters shift dramatically—by 20 percent or more overall, and by up to 80 percent in some demographic groups. This is partly because people place inordinate trust in algorithmically generated output, thinking, mistakenly, that algorithms are inherently objective and impartial.

But my research also suggests that we are conditioned to believe in high-ranking search results in much the same way that rats are conditioned to press levers in Skinner boxes. Because most searches are for simple facts (“When was Donald Trump born?”), and because correct answers to simple questions inevitably turn up in the first position, we are taught, day after day, that the higher a search result appears in the list, the more true it must be. When we finally search for information to help us make a tough decision (“Who’s better for the economy, Trump or Clinton?”), we tend to believe the information on the web pages to which high-ranking search results link.

As The Washington Post reported last year, in 2016, I led a team that developed a system for monitoring the election-related search results Google, Bing, and Yahoo were showing users in the months leading up to the presidential election, and I found pro-Clinton bias in all 10 search positions on the first page of Google’s search results. Google responded, as usual, that it has “never re-ranked search results on any topic (including elections) to manipulate political sentiment”—but I never claimed it did. I found what I found, namely that Google’s search results favored Hillary Clinton; “re-ranking”—an obtuse term Google seems to have invented to confuse people—is irrelevant.

Because (a) many elections are very close, (b) 90 percent of online searches in most countries are conducted on just one search engine (Google), and (c) internet penetration is high in most countries these days—higher in many countries than it is in the United States—it is possible that the outcomes of upwards of 25 percent of the world’s national elections are now being determined by Google’s search algorithm, even without deliberate manipulation on the part of company employees. Because, as I noted earlier, Google’s search algorithm is not constrained by equal-time rules, it almost certainly ends up favoring one candidate over another in most political races, and that shifts opinions and votes.

2. Search Suggestion Effect (SSE)
When Google first introduced autocomplete search suggestions—those short lists you see when you start to type an item into the Google search bar—it was supposedly meant to save you some time. Whatever the original rationale, those suggestions soon turned into a powerful means of manipulation that Google appears to use aggressively.

My recent research suggests that (a) Google starts to manipulate your opinions from the very first character you type, and (b) by fiddling with the suggestions it shows you, Google can turn a 50–50 split among undecided voters into a 90–10 split with no one knowing. I call this manipulation the Search Suggestion Effect (SSE), and it is one of the most powerful behavioral manipulations I have ever seen in my nearly 40 years as a behavioral scientist.
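
Epstein does not publish the mechanism, but the effect he describes is easy to sketch: ordinary prefix matching over a candidate pool, plus a quiet suppression list. Everything below (the pool, the suppressed terms, the function names) is invented for illustration:

    # Hypothetical suggestion shaping: normal prefix matching, plus a
    # suppression list that quietly removes negative completions.
    CANDIDATES = [
        "candidate x scandal", "candidate x lies",
        "candidate x speech", "candidate x policy", "candidate x rally",
    ]
    SUPPRESSED = {"candidate x scandal", "candidate x lies"}  # assumed

    def suggest(prefix, shaped=False, k=4):
        pool = [c for c in CANDIDATES if c.startswith(prefix.lower())]
        if shaped:
            pool = [c for c in pool if c not in SUPPRESSED]
        return pool[:k]

    print(suggest("candidate x"))               # negatives visible
    print(suggest("candidate x", shaped=True))  # only flattering completions remain

The user sees a normal-looking dropdown in both cases; only the unshaped run reveals that anything was removed.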

How will you know whether Google is messing with your election-related search suggestions in the weeks leading up to the election? You won’t.

3. The Targeted Messaging Effect (TME)
If, on Nov. 8, 2016, Mr. Zuckerberg had sent go-out-and-vote reminders just to supporters of Mrs. Clinton, that would likely have given her an additional 450,000 votes. I’ve extrapolated that number from Facebook’s own published data.
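
Epstein doesn't show the derivation, but the figure is reproducible from Facebook's published 61-million-user turnout experiment (Bond et al., Nature, 2012), which reported roughly a 0.39 percent direct turnout lift among users shown the social "I Voted" banner. The audience size below is an assumption chosen to show how the arithmetic could work, not a number from either source:

    # One way Epstein's ~450,000 figure could be reconstructed.
    turnout_lift = 0.0039          # per-recipient lift from the published study
    clinton_leaning_users = 115e6  # assumed size of a targeted U.S. audience

    extra_votes = turnout_lift * clinton_leaning_users
    print(f"{extra_votes:,.0f}")   # 448,500 -- on the order of 450,000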

Because Zuckerberg was overconfident in 2016, I don’t believe he sent those messages, but he is surely not overconfident this time around. In fact, it’s possible that, at this very moment, Facebook and other companies are sending out targeted register-to-vote reminders, as well as targeted go-out-and-vote reminders in primary races. Targeted go-out-and-vote reminders might also favor one party on Election Day in November.

My associates and I are building systems to monitor such things, but because no systems are currently in place, there is no sure way to tell whether Twitter, Google, and Facebook (or Facebook’s influential offshoot, Instagram) are currently tilting their messaging. No law or regulation specifically forbids the practice, and it would be an easy and economical way to serve company needs. Campaign donations cost money, after all, but tilting your messaging to favor one candidate is free.

4. Opinion Matching Effect (OME)
In March 2016, and continuing for more than seven months until Election Day, Tinder’s tens of millions of users could not only swipe to find sex partners, they could also swipe to find out whether they should vote for Trump or Clinton. The website iSideWith.com—founded and run by “two friends” with no obvious qualifications—claims to have helped more than 49 million people match their opinions to the right candidate. Both CNN and USA Today have run similar services, currently inactive.

I am still studying and quantifying this type of, um, helpful service, but so far it looks like (a) opinion matching services tend to attract undecided voters—precisely the kinds of voters who are most vulnerable to manipulation, and (b) they can easily produce opinion shifts of 30 percent or more without people’s awareness.

At this writing, iSideWith is already helping people decide who they should vote for in the 2018 New York U.S. Senate race, the 2018 New York gubernatorial race, the 2018 race for New York District 10 of the U.S. House of Representatives, and, believe it or not, the 2020 presidential race. Keep your eyes open for other matching services as they turn up, and ask yourself this: Who wrote those algorithms, and how can we know whether they are biased toward one candidate or party?

5. Answer Bot Effect (ABE)
More and more these days, people don’t want lists of thousands of search results; they just want the answer, which is being supplied by personal assistants like Google Home devices, the Google Assistant on Android devices, Amazon’s Alexa, Apple’s Siri, and Google’s featured snippets—those answer boxes at the top of Google search results. I call the opinion shift produced by such mechanisms the Answer Bot Effect (ABE).

My research on Google’s answer boxes shows three things so far: First, they reduce the time people spend searching for more information. Second, they reduce the number of times people click on search results. And third, they appear to shift opinions 10 to 30 percent more than search results alone do. I don’t yet know exactly how many votes can be shifted by answer bots, but in a national election in the United States, the number might be in the low millions.
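
The structural point is simple: a results page offers many sources, but a spoken answer offers exactly one. A sketch (all domain names and snippets are invented placeholders):

    # Illustrative only: a ranked list exposes several sources, but a voice
    # assistant reads a single passage, so ranks 2+ get zero exposure.
    results = [
        ("source-a.example", "The answer, framed one way ..."),
        ("source-b.example", "The answer, framed another way ..."),
        ("source-c.example", "A third framing ..."),
    ]

    def answer_box(ranked):
        domain, passage = ranked[0]   # winner-take-all selection
        return f"{passage} (via {domain})"

    print(answer_box(results))        # only source-a is ever heard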

6. Shadowbanning
Recently, Trump complained that Twitter was preventing conservatives from reaching many of their followers on that platform through shadowbanning, the practice of quietly hiding a user’s posts without the user knowing. The validity of Trump’s specific accusation is arguable, but the fact remains that any platform on which people have followers or friends can be rigged in a way to suppress the views and influence of certain individuals without people knowing the suppression is taking place. Unfortunately, without aggressive monitoring systems in place, it’s hard to know for sure when or even whether shadowbanning is occurring.

7. Programmed Virality and the Digital Bandwagon Effect
Big Tech companies would like us to believe that virality on platforms like YouTube or Instagram is a profoundly mysterious phenomenon, even while acknowledging that their platforms are populated by tens of millions of fake accounts that might affect virality.

In fact, there is an obvious situation in which virality is not mysterious at all, and that is when the tech companies themselves decide to shift high volumes of traffic in ways that suit their needs. And aren’t they always doing this? Because Facebook’s algorithms are secret, if an executive decided to bestow instant Instagram stardom on a pro-Elizabeth Warren college student, we would have no way of knowing that this was a deliberate act and no way of countering it.

The same can be said of the virality of YouTube videos and Twitter campaigns; they are inherently competitive—except when company employees or executives decide otherwise. Google has an especially powerful and subtle way of creating instant virality using a technique I’ve dubbed the Digital Bandwagon Effect. Because the popularity of websites drives them higher in search results, and because high-ranking search results increase the popularity of websites (SEME), Google has the ability to engineer a sudden explosion of interest in a candidate or cause with no one—perhaps not even people at the companies themselves—having the slightest idea they’ve done so. In 2015, I published a mathematical model showing how neatly this can work.
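
This is not Epstein's published model, but the feedback loop is easy to simulate: rank is assigned by popularity, and popularity grows with the clicks that rank delivers, so a tiny initial edge compounds. All numbers below are invented:

    # Toy Digital Bandwagon Effect: rank <- popularity <- clicks <- rank.
    CTR = [0.6, 0.4]   # assumed click share for rank 1 vs. rank 2

    def simulate(popularity, rounds=10, visitors=1000):
        pop = dict(popularity)
        for _ in range(rounds):
            ranked = sorted(pop, key=pop.get, reverse=True)
            for rank, site in enumerate(ranked):
                pop[site] += visitors * CTR[rank]
        return pop

    print(simulate({"site_x": 101, "site_y": 100}))
    # site_x's one-point head start hardens into a permanent, growing lead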

8. The Facebook Effect
Because Facebook’s ineptness and dishonesty have squeezed it into a digital doghouse from which it might never emerge, it gets its own precinct on my list.

In 2016, I published an article detailing five ways that Facebook could shift millions of votes without people knowing: biasing its trending box, biasing its center newsfeed, encouraging people to look for election-related material in its search bar (which it did that year!), sending out targeted register-to-vote reminders, and sending out targeted go-out-and-vote reminders.

I wrote that article before the news stories broke about Facebook’s improper sharing of user data with multiple researchers and companies, not to mention the stories about how the company permitted fake news stories to proliferate on its platform during the critical days just before the November election—problems the company is now trying hard to mitigate. With the revelations mounting, on July 26, 2018, Facebook suffered the largest one-day drop in stock value of any company in history, and now it’s facing a shareholder lawsuit and multiple fines and investigations in both the United States and the EU.

Facebook desperately needs new direction, which is why I recently called for Zuckerberg’s resignation. The company, in my view, could benefit from the new perspectives that often come with new leadership.

9. Censorship
I am cheating here by labeling one category “censorship,” because censorship—the selective and biased suppression of information—can be perpetrated in so many different ways.

Shadowbanning could be considered a type of censorship, for example, and in 2016, a Facebook whistleblower claimed he had been on a company team that was systematically removing conservative news stories from Facebook’s newsfeed. Now, because of Facebook’s carelessness with user data, the company is openly taking pride in rapidly shutting down accounts that appear to be Russia-connected—even though company representatives sometimes acknowledge that they “don’t have all the facts.”

Meanwhile, Zuckerberg has crowed about his magnanimity in preserving the accounts of people who deny the Holocaust, never mentioning the fact that provocative content propels traffic that might make him richer. How would you know whether Facebook was selectively suppressing material that favored one candidate or political party? You wouldn’t. (For a detailed look at nine ways Google censors content, see my essay “The New Censorship,” published in 2016.)

10. The Digital Customization Effect (DCE)
Any marketer can tell you how important it is to know your customer. Now, think about that simple idea in a world in which Google has likely collected the equivalent of millions of Word pages of information about you. If you randomly display a banner ad on a web page, out of 10,000 people, only five are likely to click on it; that’s the CTR—the “clickthrough rate” (0.05 percent). But if you target your ad, displaying it only to people whose interests it matches, you can boost your CTR a hundredfold.
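
The arithmetic in that example, spelled out:

    impressions = 10_000
    random_clicks = 5
    baseline_ctr = random_clicks / impressions          # 0.0005, i.e. 0.05%
    targeted_ctr = baseline_ctr * 100                   # the hundredfold boost
    print(f"{baseline_ctr:.2%} -> {targeted_ctr:.2%}")  # 0.05% -> 5.00%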

That’s why Google, Facebook, and others have become increasingly obsessed with customizing the information they show you: They want you to be happily and mindlessly clicking away on the content they show you.

In the research I conduct, my impact is always larger when I am able to customize information to suit people’s backgrounds. Because I know very little about the participants in my experiments, however, I am able to do so in only feeble ways, but the tech giants know everything about you—even things you don’t know about yourself. This tells me that the effect sizes I find in my experiments are probably too low. The impact that companies like Google are having on our lives is quite possibly much larger than I think it is. Perhaps that doesn’t scare you, but it sure scares me.

The Same Direction

OK, you say, so much for Epstein’s list! What about those other shenanigans we’ve heard about: voter fraud (Trump’s explanation for why he lost the popular vote), gerrymandering, rigged voting machines, targeted ads placed by Cambridge Analytica, votes cast over the internet, or, as I mentioned earlier, those millions of bots designed to shift opinions? What about hackers like Andrés Sepúlveda, who spent nearly a decade using computer technology to rig elections in Latin America? What about all the ways new technologies make dirty tricks easier in elections? And what about those darn Russians, anyway?

To all that I say: kid stuff. Dirty tricks have been around since the first election was held millennia ago. But unlike the new manipulative tools controlled by Google and Facebook, the old tricks are competitive—it’s your hacker versus my hacker, your bots versus my bots, your fake news stories versus my fake news stories—and sometimes illegal, which is why Sepúlveda’s efforts failed many times and why Cambridge Analytica is dust.

“Cyberwar,” a new book by political scientist Kathleen Hall Jamieson, reminds us that targeted ads and fake news stories can indeed shift votes, but the numbers are necessarily small. It’s hard to overwhelm your competitor when he or she can play the same games you are playing.

Now, take a look at my numbered list. The techniques I’ve described can shift millions of votes without people’s awareness, and because they are controlled by the platforms themselves, they are entirely noncompetitive. If Google or Facebook or Twitter wants to shift votes, there is no way to counteract their manipulations. In fact, at this writing, there is not even a credible way of detecting those manipulations.

And what if the tech giants are all leaning in the same political direction? What if the combined weight of their subtle and untraceable manipulative power favors one political party? If 150 million people vote this November in the United States, with 20 percent still undecided at this writing (that’s 30 million people), I estimate that the combined weight of Big Tech manipulations could easily shift upwards of 12 million votes without anyone knowing. That’s enough votes to determine the outcomes of hundreds of close local, state, and congressional races throughout the country, which makes the free-and-fair election little more than an illusion.
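
The arithmetic behind that estimate:

    voters = 150_000_000
    undecided = int(voters * 0.20)    # 30,000,000 undecided at this writing
    shifted = 12_000_000              # Epstein's upper-end estimate
    print(f"{undecided:,} undecided; {shifted / undecided:.0%} of them shifted")
    # -> 30,000,000 undecided; 40% of them shifted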

Full disclosure: I happen to think that the political party currently in favor in Silicon Valley is, by a hair (so to speak), the superior party at the moment. But I also love America and democracy, and I believe that the free-and-fair election is the bedrock of our political system. I don’t care how “right” these companies might be; lofty ends do not justify shady means, especially when those means are difficult to see and not well understood by either authorities or the public.

Can new regulations or laws save us from the extraordinary powers of manipulation the Big Tech companies now possess? Maybe, but our leaders seem to be especially regulation-shy these days, and I doubt, in any case, whether laws and regulations will ever be able to keep up with the new kinds of threats that new technologies will almost certainly pose in coming years.

I don’t believe we are completely helpless, however. I think that one way to turn Facebook, Google, and the innovative technology companies that will succeed them into responsible citizens is to set up sophisticated monitoring systems that detect, analyze, and archive what they’re showing people—in effect, to fight technology with technology.

As I mentioned earlier, in 2016, I led a team that monitored search results on multiple search engines. That was a start, but we can do much better. These days, I’m working with business associates and academic colleagues on three continents to scale up systems to monitor a wide range of information the Big Tech companies are sharing with their users—even the spoken answers provided by personal assistants. Ultimately, a worldwide ecology of passive monitoring systems will make these companies accountable to the public, with information bias and online manipulation detectable in real time.
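
In outline, such a monitoring system only needs to do two things: archive timestamped snapshots of what a platform showed for a query, and diff successive snapshots to flag changes. A minimal sketch, with the actual collection step stubbed out (a real monitor would capture results from a panel of live browsers, which is how Epstein describes his systems working):

    import json, time

    def fetch_results(query):
        """Stub: a real system would record what monitored browsers are shown."""
        return ["result-1.example", "result-2.example"]

    def snapshot(query, archive_path="serp_archive.jsonl"):
        record = {"ts": time.time(), "query": query, "results": fetch_results(query)}
        with open(archive_path, "a", encoding="utf-8") as f:
            f.write(json.dumps(record) + "\n")

    def changed(query, archive_path="serp_archive.jsonl"):
        with open(archive_path, encoding="utf-8") as f:
            rows = [json.loads(line) for line in f if line.strip()]
        rows = [r for r in rows if r["query"] == query]
        return len(rows) >= 2 and rows[-1]["results"] != rows[-2]["results"]

    snapshot("abortion"); snapshot("abortion")
    print(changed("abortion"))   # False for the stub; True when results shift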

With November drawing near, there is obviously some urgency here. At this writing, it’s not clear whether we will be fully operational in time to monitor the midterm elections, but we’re determined to be ready for 2020.

- Robert Epstein is a senior research psychologist at the American Institute for Behavioral Research and Technology in California. Epstein, who holds a doctorate from Harvard University, is the former editor-in-chief of Psychology Today and has published 15 books and more than 300 articles on internet influence and other topics. He is currently working on a book called “Technoslavery: Invisible Influence in the Internet Age and Beyond.” His research is featured in the new documentary “The Creepy Line.” You can find him on Twitter @DrREpstein.

-------------------------------------------


HERE IS WHAT YOU MUST KNOW ABOUT HOW GOOGLE IS USING YOUR ELECTRONICS TO ABUSE YOUR HUMAN SOCIAL RIGHTS, PROFITEER OFF YOUR PRIVACY AND PLAY MIND-GAMES WITH POLITICS:

With daily headlines about Google's Big Tech scandals, sex cults and clandestine data-sharing, there’s no better time to read up on these topics.

The choices below are listed in no particular order and, wherever possible, we link to author websites and privacy-respecting sources.

Reading about surveillance capitalism may not warm your heart, but it could put a fire in your belly and encourage you to #DemandFreedom in 2019.

The End of Trust – McSweeney’s Issue 54 (Nov. 2018)

Compiled by the team at the Electronic Frontier Foundation (EFF) for McSweeney’s, this collection features writing by luminaries like Cory Doctorow, Gabriella Coleman, Edward Snowden, Bruce Schneier, and many more. Among the gems within is a conversation between artist Trevor Paglen and journalist Julia Angwin, with Paglen having this to say about the intersection of freedom and privacy:

“I think I had the sense of growing up within structures that didn’t work for me and feeling like there was a deep injustice around that. Feeling like the world was set up to move you down certain paths and to enforce certain behaviors and norms [didn’t] work for me, and realizing that the value of this word formerly known as privacy, otherwise known as liberty, plays not only at the scale of the individual, but also as a kind of public resource that allows for the possibility of, on one hand, experimentation, but then, on the other hand, things like civil liberties and self-representation.”

Click Here to Kill Everybody: Security and Survival in a Hyper-connected World – Bruce Schneier, W. W. Norton & Company (Sep. 2018)

Schneier’s latest book is a sobering account of the pitfalls of modern technology. It covers a lot of ground, such as the huge gap between security and implementation in Internet-of-Things devices. The author has a gift for raising questions that cause the reader to rethink the underlying technology behind seemingly-simple tech, like network-connected baby monitors:

“They’re surveillance devices by design, and can pick up a lot more than a baby’s cries. Of course, I had a lot of security questions. How is the audio and video transmission secured? What’s the encryption algorithm? How are encryption keys generated, and who has copies of them? If data is stored on the cloud, how long is it stored and how is it secured? How does the smartphone app, if the monitor uses one, authenticate to the cloud server?”

American Spies: Modern Surveillance, Why You Should Care, and What to Do About It – Jennifer Stisa Granick, Cambridge University Press (Jan. 2017)

Granick gives the reader a real sense of just how big, and just how pervasive, U.S. intelligence programs really are. The author doesn’t stop with government programs, however, and calls out Big Tech for its major role in population surveillance:

“Spying is thriving not only because of technology, but also because of modern business models. Much of the modern privacy problem is the result of people giving up their data – knowingly or otherwise – to obtain cool new products and services.”

Nothing to Hide: The False Tradeoff between Privacy and Security – Daniel J. Solove, Yale University Press (Jan. 2013)

This is a now-classic rumination on the deeply important role of privacy in autonomy and freedom. It quickly demolishes the “nothing to hide argument”, a constant refrain in today’s privacy debates, and continues to shed light on social and legal dimensions of surveillance. Here, Solove highlights contradictory perceptions of audio and video snooping:

“The electronic-surveillance statutes strongly protect against the government’s eavesdropping on your conversations but don’t protect against the government’s watching you. This distinction doesn’t make a lot of sense. Video surveillance involves similar threats to privacy as audio surveillance. As one court noted: ‘Television surveillance is identical in its indiscriminate character to wiretapping and bugging. It is even more invasive of privacy… but it is not more indiscriminate: the microphone is as ‘dumb’ as the television camera; both devices pick up anything within their electronic reach, however irrelevant to the investigation.'”

Dragnet Nation: A Quest for Privacy, Security, and Freedom in a World of Relentless Surveillance – Julia Angwin, Times Books (Feb. 2014)

Angwin is no stranger to the many facets of surveillance capitalism, and this book is just as prescient now as it was five years ago. In that time, the author’s concerns have been validated, with the pace of Big Tech’s blunders only escalating. Angwin keeps the human element in constant view, giving vital context to headlines about privacy and data catastrophes:

“Skeptics say: ‘What’s wrong with all of our data being collected by unseen watchers? Who is being harmed?’ Admittedly, it can be difficult to demonstrate personal harm from a data breach. If Sharon or Bilal is denied a job or insurance, they may never know which piece of data caused the denial. People placed on the no-fly list are never informed about the data that contributed to the decision. But, on a larger scale, the answer is simple: troves of personal data can and will be abused.”

Free Software, Free Society, 3rd Edition – Richard M. Stallman, Free Software Foundation (Oct. 2015)

Stallman’s status as an icon in the Free/Libre world is often the focus of press. Bootstrapping GNU and the Free Software movement was no small feat, but there is too little focus on Stallman’s writing. The author’s philosophy is grounded in practical concerns and explained with a clear and mindful tone that few writers possess. This most recent edition of Stallman’s collected essays describes just how important liberty is in the contemporary digital context:

“If ‘cloud computing’ has a meaning, it is not a way of doing computing, but rather a way of thinking about computing: a devil-may-care approach which says, ‘Don’t ask questions. Don’t worry about who controls your computing or who holds your data. Don’t check for a hook hidden inside our service before you swallow it. Trust companies without hesitation.’ In other words, ‘Be a sucker.’”

Defending Politically Vulnerable Organizations Online – Sean Brooks, Center for Long-Term Cybersecurity (July 2018)

In this report from the Center for Long-Term Cybersecurity (CLTC), Brooks provides a broad overview of the cybersecurity landscape. This is a great introduction for industry professionals and consumers alike, though it focuses on civil organizations that are often targeted for political reasons. The report’s citations are a valuable resource in their own right, providing context as well as technological solutions. The author is quick to point out lackluster investment in cybersecurity in both the public and private spheres, describing the vicious cycle this creates:

“The broad asymmetry between attackers and defenders online is unsurprising; politically vulnerable organizations lack resources and are therefore particularly under-protected. This problem is not unique to politically vulnerable organizations. Many public and private organizations have underinvested in cybersecurity and have become soft targets for criminals and other bad actors. Online attackers have continued to develop their offensive capabilities, exacerbating the mismatch.”

Bad Blood: Secrets and Lies in a Silicon Valley Startup – John Carreyrou, Penguin Random House (May 2018)

This story of the rise and fall of biotech startup Theranos is a page-turner, described here with all the detail of investigative journalism. Carreyrou’s most interesting passages are those where the author describes the culture of Silicon Valley, where fraudulent CEO Elizabeth Holmes was desperately trying to fill the mold of her Big Tech heroes:

“For a young entrepreneur building a business in the heart of Silicon Valley, it was hard to escape the shadow of Steve Jobs. By 2007, Apple’s founder had cemented his legend in the technology world and in American society at large… to anyone who spent time with Elizabeth, it was clear that she worshipped Jobs and Apple.”

The Doomsday Machine: Confessions of a Nuclear War Planner – Daniel Ellsberg, Bloomsbury USA (Dec. 2017)

Decades after the legendary whistleblower disclosed the Pentagon Papers to the American public, Ellsberg’s warnings will still ring alarm bells and shock the reader. Through first-hand accounts, the author chronicles the nuclear program of the 1960’s and the dangers of the present day, describing the contrasting roles of secrecy and transparency, as well as their relationship to trust:

“Like discussion of covert operations and assassination plots, nuclear war plans and threats are taboo for public discussion by the small minority of officials and consultants who know anything about them. In addition to their own sense of identity as trustworthy keepers of these most-sensitive secrets, there is a strong careerist aspect to their silence.”

The Participatory Condition in the Digital Age – Electronic Mediations Book 51 (Nov. 2016)

This collection of articles spans the gamut from street protests to online “hacktivism” to Free and Open-Source collaboration. The editors expertly summarize the transdisciplinary tone of the volume in an introduction that’s worth contemplating in its own right. Among other issues, Gabriella Coleman describes Kate Crawford’s work on the power and scale of spying:

“Ubiquitous surveillance facilitated by [information and communications technology or ICTs] – what Crawford designates as ‘algorithmic listening’ – and the gathering of personal data currently operated by web-based corporations (commercial surveillance) and governments (the NSA program, for example) are not simply matters of privacy but also of scale and lack of accountability.”

Privacy and Big Data: The Players, Regulators, and Stakeholders – Terence Craig & Mary Ludloff, O’Reilly Media (Sep. 2011)

Published at a time when “Big Data” was more of a buzzword than a factor of everyday life, this book is a quick and easy introduction to the perils of the data economy. The lessons would seem dated if they weren’t still applicable, and there’s perhaps nothing more prescient than the fact that data can not only be sold by Big Tech to business partners, it can be given away:

“While the IP stakeholders have been busy redefining “privacy” for their own ends, Google, Yahoo, Facebook, and others have been equally busy making billions of dollars collecting your data and using it for targeted advertising. Of course, any company or organization that collects data can offer it for sale or free.”

Habeas Data: Privacy vs. the Rise of Surveillance Tech – Cyrus Farivar, Melville House (May 2018)

Farivar exposes the role of common, household tech in the global surveillance apparatus, diving into the court cases and legal precedent that shapes the scope and limits of privacy and security. Above all, the author steeps his analysis in history, with quotes from legal heavyweights like Louis Brandeis, here discussing wiretaps in a famous dissenting opinion:

“‘The progress of science in furnishing the Government with means of espionage is not likely to stop with wiretapping,’ Brandeis wrote. ‘Ways may someday be developed by which the Government, without removing papers from secret drawers, can reproduce them in court, and by which it will be enabled to expose to a jury the most intimate occurrences of the home.'”

Related coverage of Dr. Epstein's research:

Dr. Robert Epstein: Google Has the ‘Power to Flip ...
“That’s right and of course in most elections, especially close ones, it’s the undecided people who determine the outcome of the election, so if you can swing a lot of undecided people — and Google has at least three ways to do that that we’re studying,” responded Epstein.
https://www.breitbart.com/tech/2018/05/03/robert-epstein/

How Google Could Rig the 2016 Election - POLITICO Magazine
“Google can drive millions of votes to a candidate with no one the wiser.”
https://www.politico.com/magazine/stor[...]gle-could-rig-the-2016-election-121548

Dr. Robert Epstein Discusses The Battle for Your …
https://theconservativetreehouse.com/2[...]in-discusses-the-battle-for-your-mind/

Dr. Robert Epstein: Research Documents Google Search - Breitbart
https://www.breitbart.com/tech/2018/12[...]s-most-vulnerable-to-google-influence/

Google's Search Algorithm Could Steal the Presidency | WIRED
https://www.wired.com/2015/08/googles-search-algorithm-steal-presidency/

EXCLUSIVE -- Research: Google Search …
“His latest research looks at how search engines can affect voters by suggesting negative or positive search terms when a political candidate’s name is entered into the search bar.”
https://www.breitbart.com/tech/2018/04[...]ng-nearly-80-percent-undecided-voters/

'Google has power to control elections, can shift millions of votes to ...
https://www.rt.com/op-ed/364910-robert-epstein-google-hillary-clinton/